# Multi-task evaluation

| Model | License | Description | Tags | Author | Downloads | Likes |
| --- | --- | --- | --- | --- | --- | --- |
| Thanos Picking Power Gem | Apache-2.0 | A robot policy model trained with LeRobot for the task of picking up the Power Gem. | Multimodal Fusion, Safetensors | pepijn223 | 188 | 0 |
| GIST Embedding V0 | MIT | GIST-Embedding-v0 is a sentence embedding model built on sentence-transformers, mainly used for sentence similarity and feature extraction tasks. | Text Embedding, English | avsolatorio | 252.21k | 26 |
| Deepseek R1 Quantized.w4a16 | MIT | An INT4 weight-quantized version of DeepSeek-R1 that reduces GPU memory and disk space requirements by approximately 50% while maintaining the original model's performance. | Large Language Model | RedHatAI | 119 | 4 |
| Latxa Llama 3.1 70B Instruct FP8 | | Latxa is a 70B-parameter Basque large language model based on Llama-3.1, instruction-tuned and FP8-quantized, specifically optimized for Basque. | Large Language Model, Transformers | HiTZ | 988 | 1 |
| Latxa Llama 3.1 70B Instruct | | Latxa 3.1 70B Instruct is an instruction-tuned version of Llama-3.1 (Instruct), specifically optimized for Basque and demonstrating excellent performance on multiple Basque benchmarks. | Large Language Model, Transformers, Supports Multiple Languages | HiTZ | 59 | 3 |
| Gte Small Q8 0 GGUF | MIT | GTE-small is an efficient sentence embedding model built on the thenlper/gte-small foundation model, focusing on sentence similarity tasks. | Text Embedding, English | ggml-org | 66 | 1 |
| Lamarckvergence 14B | Apache-2.0 | Lamarckvergence-14B is a pre-trained language model merged via mergekit, combining Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose-DS. It ranks first among models with fewer than 15B parameters on the Open LLM Leaderboard. | Large Language Model, Transformers, English | suayptalha | 15.36k | 24 |
| LENS D4000 | Apache-2.0 | LENS-d4000 is a transformer-based text embedding model focused on feature extraction and sentence similarity, excelling in multiple text classification tasks. | Text Embedding, Transformers | yibinlei | 19 | 1 |
| Dunzhang Stella En 400M V5 | MIT | Stella 400M is a medium-scale English text processing model focused on classification and information retrieval tasks. | Text Classification, Transformers, Other | Marqo | 17.20k | 7 |
| Wiroai Turkish Llm 8b | Apache-2.0 | A Turkish large language model developed by WiroAI, fine-tuned from Llama-3.1-8B-Instruct and specializing in Turkish text generation and dialogue tasks. | Large Language Model, Transformers, Other | WiroAI | 3,117 | 9 |
| Bielik 11B V2 | Apache-2.0 | Bielik-11B-v2 is an 11-billion-parameter generative text model developed and trained specifically for Polish. It is initialized from Mistral-7B-v0.2 and trained on 400 billion tokens. | Large Language Model, Transformers, Other | speakleash | 690 | 40 |
| Meltemi 7B Instruct V1.5 | Apache-2.0 | Meltemi 7B Instruct v1.5 is a Greek instruction-tuned large language model improved from Mistral 7B, focusing on Greek natural language processing tasks. | Large Language Model, Transformers | ilsp | 1,237 | 21 |
| Llama 3 Instruct 8B SPPO Iter3 | Apache-2.0 | A large language model produced in the third iteration of Self-Play Preference Optimization (SPPO) on top of Meta-Llama-3-8B-Instruct. | Large Language Model, Transformers, English | UCLA-AGI | 8,539 | 83 |
| Jina Embeddings V2 Base Zh | Apache-2.0 | Jina Embeddings V2 Base is a sentence embedding model optimized for Chinese that converts text into high-dimensional vector representations for sentence similarity and feature extraction. | Text Embedding, Supports Multiple Languages | silverjam | 63 | 1 |
| Openelm 1 1B Instruct | | OpenELM is a family of open-source, efficient language models that use a layer-wise scaling strategy to allocate parameters efficiently across the Transformer's layers, improving model accuracy. | Large Language Model, Transformers | apple | 1.5M | 62 |
| Dmeta Embedding Zh Small | Apache-2.0 | Dmeta-embedding-zh-small is a model that performs well across multiple natural language processing tasks and is particularly suited to Chinese text processing. | Text Embedding, Transformers | DMetaSoul | 10.76k | 16 |
| Vortex 3b | Other | vortex-3b is a 2.78-billion-parameter causal language model developed by OEvortex, based on the Pythia-2.8b model and fine-tuned on the Vortex-50k dataset. | Large Language Model, Transformers, English | OEvortex | 16 | 5 |
| Bge Micro V2 | | bge_micro is a sentence embedding model built on sentence-transformers, focusing on sentence similarity and feature extraction tasks. | Text Embedding, Transformers | SmartComponents | 468 | 2 |
| Mmlw Roberta Large | Apache-2.0 | A large Polish sentence transformer model based on the RoBERTa architecture, focusing on sentence similarity and feature extraction tasks. | Text Embedding, Transformers, Other | sdadas | 5,007 | 13 |
| Mmlw Roberta Base | Apache-2.0 | A Polish sentence embedding model based on the RoBERTa architecture, focusing on sentence similarity and feature extraction tasks. | Text Embedding, Transformers, Other | sdadas | 106.30k | 3 |
| Mmlw E5 Small | Apache-2.0 | mmlw-e5-small is a sentence transformer model for sentence similarity tasks with support for Polish text. | Text Embedding, Transformers, Other | sdadas | 76 | 0 |
| St Polish Kartonberta Base Alpha V1 | | A Polish sentence transformer model based on the KartonBERTa architecture, primarily used for sentence similarity and feature extraction tasks. | Text Embedding, Transformers, Other | OrlikB | 3,494 | 3 |
| Stella Base Zh V2 | | stella-base-zh-v2 is a Chinese semantic similarity model built on sentence transformers, supporting a range of text similarity tasks and evaluation benchmarks. | Text Embedding | infgrad | 95 | 15 |
| Bge Micro V2 | MIT | bge_micro is a lightweight model focused on sentence similarity, suitable for a variety of natural language processing tasks. | Text Embedding, Transformers | TaylorAI | 248.53k | 46 |
| Bge Micro | | bge_micro is a lightweight, transformer-based sentence similarity model designed for efficient feature extraction and sentence similarity tasks. | Text Embedding, Transformers | TaylorAI | 1,799 | 23 |
| Puma 3B | Apache-2.0 | Puma-3B is a text generation model fine-tuned from OpenLLaMA 3B V2, trained on the ShareGPT Hyperfiltered dataset and suitable for a variety of text generation tasks. | Large Language Model, Transformers, English | acrastt | 427 | 4 |
| Open Llama 13b | Apache-2.0 | OpenLLaMA is an open-source reproduction of Meta AI's LLaMA large language model, offering pre-trained models with 3B, 7B, and 13B parameters. | Large Language Model, Transformers | openlm-research | 1,300 | 455 |
| Polyglot Ko 1.3b | Apache-2.0 | Polyglot-Ko is part of a series of Korean autoregressive language models developed by EleutherAI's multilingual team, containing 1.3 billion parameters and specifically optimized for Korean. | Large Language Model, Transformers, Korean | EleutherAI | 121.13k | 83 |
| Bert Large Arabertv02 | | AraBERT is an Arabic pre-trained language model based on the BERT architecture, optimized for Arabic natural language understanding tasks. | Large Language Model, Arabic | aubmindlab | 2,444 | 9 |
| SGPT 5.8B Weightedmean Msmarco Specb Bitfit | | SGPT-5.8B is a sentence transformer model using weighted-mean pooling, focused on sentence similarity tasks, trained on the MS MARCO dataset and optimized with SpecB-BitFit. | Text Embedding | Muennighoff | 164 | 23 |
| SGPT 2.7B Weightedmean Msmarco Specb Bitfit | | SGPT-2.7B is a sentence transformer model using weighted-mean pooling, focused on sentence similarity tasks, trained on the MS MARCO dataset with BitFit applied. | Text Embedding | Muennighoff | 85 | 3 |
| Fairseq Dense 125M | | A Hugging Face transformers-compatible conversion of the 125M-parameter dense model from Artetxe et al.'s paper. | Large Language Model, Transformers, English | KoboldAI | 27 | 3 |
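Many of the embedding models listed above are described as serving "sentence similarity" tasks. In practice this usually means comparing embedding vectors with cosine similarity. A minimal sketch in plain Python, using illustrative vectors rather than real model outputs (actual models such as the embedding models above produce vectors with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors: a.b / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative 4-dimensional "embeddings" of two similar sentences;
# these values are made up for demonstration, not taken from any model.
emb_1 = [0.1, 0.3, -0.2, 0.8]
emb_2 = [0.1, 0.25, -0.1, 0.7]

print(round(cosine_similarity(emb_1, emb_2), 3))  # close to 1.0 for similar vectors
```

The score ranges from -1 to 1, with values near 1 indicating that the two sentences point in nearly the same direction in embedding space; retrieval and similarity benchmarks for these models are typically built on exactly this comparison.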
© 2025 AIbase